AI Experts Call For Policy Action to Avoid Extreme Risks
On Tuesday, 24 AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document had a particular focus on extreme risks posed by the most advanced systems, such as enabling large-scale criminal or terrorist activities. The paper makes a number of concrete policy recommendations, such as ensuring that major tech companies and public funders devote at least one-third of their AI R&D budget to projects that promote safe and ethical use of AI. The authors also call for the creation of national and international standards. Bengio, scientific director at the Montreal Institute for Learning Algorithms, says that the paper aims to help policymakers, the media, and the general public "understand the risks, and some of the things we have to do to make [AI] systems do what we want."
- North America > Canada > Quebec > Montreal (0.55)
- North America > United States > California > Alameda County > Berkeley (0.07)
- Asia > China (0.05)
- Government (1.00)
- Law Enforcement & Public Safety > Terrorism (0.55)
- Law > Statutes (0.50)
AI Experts Call For Pause In Development Of Advanced Systems - Dataconomy
On Tuesday, the Future of Life Institute published an open letter signed by around 1,000 AI experts and tech executives, including Elon Musk and Steve Wozniak, urging AI labs to pause the development of advanced AI systems more powerful than GPT-4. The letter cites "profound risks" to human society as the reason for the call to action and urges a halt of at least six months in the training of such systems; the pause, it says, should be public, verifiable, and include all key actors. The group argues that AI systems with human-competitive intelligence pose significant risks to society and humanity, as demonstrated by extensive research and acknowledged by top AI labs. They believe that advanced AI systems could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. However, they argue that this level of planning and management is not happening, as AI labs are engaged in a race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
AI experts call to block publication of study on neural network that claims to 'predict' criminality
A group of more than 1,000 researchers and academics are calling on Springer to reconsider the publication of an upcoming study on a neural network that claims to 'predict' criminality. In an open letter published this week, the group, which consists of experts in the fields of statistics, machine learning and artificial intelligence, law, sociology, history, communication studies and anthropology, cautions Springer, the publisher of Nature, over the publication of the study in an upcoming book series. The study itself claims that automated facial recognition software can be used as a 'predictive policing' tool for law enforcement, identifying criminals before they commit crimes. 'We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,' said co-author of the study and Harrisburg University professor Roozbeh Sadeghian in a statement. 'This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.'
AI experts call for 'bias bounties' to boost ethics scrutiny – Government & civil service news
Experts from the private sector and leading research labs in the US and Europe have joined forces to create a toolkit for turning AI ethics principles into practice. The preprint paper, published last week, advocates paying people for finding risks of bias in artificial intelligence (AI) systems – adapting a model used to check the security of new computer systems, in which hackers are paid 'bounties' for identifying weaknesses. The paper also proposes better linking independent third-party auditing operations and government policies to foster a market in regulatory systems, and suggests that governments increase funding for researchers in academia to verify performance claims made by industry. The 80-page paper, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, has been put together by AI specialists from 30 organisations including Google Brain, Intel, OpenAI, Stanford University and the Leverhulme Centre for the Future of Intelligence. "In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, there is a need to move beyond [ethics] principles to a focus on mechanisms for demonstrating responsible behaviour," the executive summary reads.
- North America > United States (0.26)
- Europe (0.26)
- Oceania > New Zealand (0.06)
- (2 more...)
- Government (1.00)
- Law > Statutes (0.32)
AI expert calls for end to UK use of 'racially biased' algorithms
An expert on artificial intelligence has called for all algorithms that make life-changing decisions – in areas from job applications to immigration into the UK – to be halted immediately. Prof Noel Sharkey, who is also a leading figure in a global campaign against "killer robots", said algorithms were so "infected with biases" that their decision-making processes could not be fair or trusted. A moratorium must be imposed on all "life-changing decision-making algorithms" in Britain, he said. Sharkey has suggested testing AI decision-making machines in the same way that new pharmaceutical drugs are rigorously tested before they are allowed onto the market. In an interview with the Guardian, the Sheffield University robotics and AI pioneer said he was deeply concerned over a series of examples of machine-learning systems being loaded with bias.
- Europe > United Kingdom (0.25)
- North America > United States (0.05)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.74)
- Government (0.58)
AI experts call to curb mass surveillance
The EU's top AI experts say regulation should focus on high-risk applications. Europe needs rules to make sure artificial intelligence won't be used to build up a China-style high-tech surveillance state, the European Union's top AI experts warn. On Wednesday, an expert panel is set to present the bloc's leaders with a list of 33 recommendations on how to move forward on AI governance, including a stark warning against the use of AI to control and monitor citizens. In a 48-page final draft of the document, obtained by POLITICO, the experts urge policymakers to define "red lines" for high-risk AI applications – such as systems that mass-monitor individuals or rank them according to their behavior – and to discuss outlawing some controversial technologies. "Ban AI-enabled mass-scale scoring of individuals," the expert group demands, adding that there need to be "very clear and strict rules for surveillance for national security purposes and other purposes claimed to be in the public or national interest."
- Europe > Germany (0.15)
- North America > United States (0.05)
- Europe > Serbia (0.05)
- Asia > China > Beijing > Beijing (0.05)
- Information Technology > Security & Privacy (0.86)
- Government > Regional Government > Europe Government (0.70)
Google pledges to not work on weapons after Project Maven backlash
Google has pledged to never work on artificial intelligence weapons projects, laying down the principle after a collaboration with the US military fomented an employee revolt. The technology giant recently announced it would discontinue work with the Department of Defense on Project Maven, an artificial intelligence project that analyses imagery and could be used to enhance the efficiency of drone strikes. Thousands of employees had signed onto a letter warning that Google's participation contravened the company's ethical tenets. Stating that "Google should not be in the business of war", the letter warned that the company's involvement would compromise its image and drive away potential employees. A blog post by CEO Sundar Pichai addressed the underlying debate, establishing guidelines for future artificial intelligence (AI) projects that pledged to ensure the work benefits society and to eschew "technologies that cause or are likely to cause overall harm".
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Hundreds of AI experts call on Google to stop weaponizing technology as employees resign in protest
Artificial intelligence researchers have called on Google to abandon a project developing AI technology for the military, warning that autonomous weapons directly contradict the firm's famous 'Don't Be Evil' motto. The experts join more than 3,100 of Google's own employees, who signed an open letter last month protesting the company's involvement in a controversial Pentagon program called Project Maven. The partnership between the technology giant and the US military involves using customised AI surveillance software to analyse data from drone footage in order to better recognise target objects, such as distinguishing between a human on the ground and a vehicle. Around a dozen employees have reportedly resigned in protest at Google's refusal to cut ties with the US military, each one citing ethical concerns to Gizmodo. Google did not respond to a request for comment from The Independent.
- Government > Military (0.76)
- Information Technology > Services (0.54)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.38)
- Information Technology > Communications > Social Media (0.36)
AI Experts Call for Boycott of South Korea University That May Be Developing Killer Robots
I grew up in a plattenbau neighborhood, in a micro-district inspired by Le Corbusier. It is fashionable to despise this architectural style, but I must admit that I secretly love factory-assembled plattenbau projects. The thing is, I love them in design sketches and maquette miniatures; I love them in the imagination of their architects; I love them on 1970s postcards showing the idyllic, innocent urban landscapes of social-housing developments in heartland French towns (which, as we know, eventually turned into ethnic ghettos and welfare-subsidised slums). They say mass-produced buildings and vast spaces create ghettos. They blame Le Corbusier for their misery.
- Asia > South Korea (0.40)
- Europe > France (0.26)